
    A Stochastic Majorize-Minimize Subspace Algorithm for Online Penalized Least Squares Estimation

    Stochastic approximation techniques play an important role in solving many problems encountered in machine learning or adaptive signal processing. In these contexts, the statistics of the data are often unknown a priori, or their direct computation is too intensive, so they have to be estimated online from the observed signals. For the batch optimization of an objective function that is the sum of a data fidelity term and a penalization (e.g. a sparsity promoting function), Majorize-Minimize (MM) methods have recently attracted much interest since they are fast, highly flexible, and effective in ensuring convergence. The goal of this paper is to show how these methods can be successfully extended to the case when the data fidelity term corresponds to a least squares criterion and the cost function is replaced by a sequence of stochastic approximations of it. In this context, we propose an online version of an MM subspace algorithm and study its convergence using suitable probabilistic tools. Simulation results illustrate the good practical performance of the proposed algorithm, associated with a memory gradient subspace, when applied to both non-adaptive and adaptive filter identification problems.
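    To make the mechanics concrete, here is a minimal sketch of a stochastic MM memory-gradient iteration for online penalized least squares, in the spirit of the abstract above. The forgetting factor `rho`, the hyperbolic smoothing `delta`, and all function names are our own illustrative choices, not the paper's exact algorithm; note that the subspace step reduces to a 2x2 linear solve, which keeps each iteration cheap regardless of the filter length.

```python
import numpy as np

def hyp_weights(x, delta=1e-2):
    # Curvature weights majorizing the smoothed-l1 (hyperbolic) penalty
    # psi(x) = sum(sqrt(x**2 + delta**2)): psi'(t)/t = 1/sqrt(t**2 + delta**2).
    return 1.0 / np.sqrt(x**2 + delta**2)

def stochastic_mm_memory_gradient(stream, n, lam=0.1, rho=0.99, n_iter=500):
    """Online penalized least squares: track second-order statistics with
    exponential forgetting, then take an MM step restricted to the
    memory-gradient subspace span{-gradient, previous displacement}."""
    x, x_prev = np.zeros(n), np.zeros(n)
    R, r = 1e-6 * np.eye(n), np.zeros(n)          # E[h h^T], E[y h] estimates
    for _, (h, y) in zip(range(n_iter), stream):
        R = rho * R + (1 - rho) * np.outer(h, h)  # online autocorrelation
        r = rho * r + (1 - rho) * y * h           # online cross-correlation
        w = hyp_weights(x)
        grad = R @ x - r + lam * w * x            # gradient of the surrogate cost
        D = np.stack([-grad, x - x_prev], axis=1) # memory-gradient subspace
        A = R + lam * np.diag(w)                  # quadratic majorant curvature
        B = D.T @ A @ D + 1e-12 * np.eye(2)       # reduced 2x2 system
        u = np.linalg.solve(B, -D.T @ grad)       # optimal step in the subspace
        x_prev, x = x, x + D @ u
    return x
```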

    GraphEM: EM algorithm for blind Kalman filtering under graphical sparsity constraints

    Modeling and inference with multivariate sequences is central in a number of signal processing applications such as acoustics, social network analysis, biomedicine, and finance, to name a few. The linear-Gaussian state-space model is a common way to describe a time series through the evolution of a hidden state, with the advantage of offering a simple inference procedure thanks to the celebrated Kalman filter. A fundamental question when analyzing a multivariate sequence is the search for relationships between its entries (or the entries of the modeled hidden state), especially when the inherent structure is a non-fully connected graph. In such a context, graphical modeling combined with parsimony constraints makes it possible to limit the proliferation of parameters and enables a compact data representation that is easier for experts to interpret. In this work, we propose a novel expectation-maximization algorithm for estimating the linear matrix operator in the state equation of a linear-Gaussian state-space model. Lasso regularization is included in the M-step, which we solve using a Douglas-Rachford proximal splitting algorithm. Numerical experiments illustrate the benefits of the proposed model and inference technique, named GraphEM, over competitors relying on Granger causality.
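    The M-step described above, a Lasso-regularized quadratic in the transition matrix, lends itself to a compact Douglas-Rachford iteration. The sketch below is a hedged illustration: it assumes the state noise covariance has been absorbed into the E-step statistics `Phi` and `Psi` (smoothed cross- and auto-covariance sums), and the names are ours, not GraphEM's actual code.

```python
import numpy as np

def soft_threshold(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def graphem_m_step(Phi, Psi, lam, gamma=1.0, n_iter=200):
    """Douglas-Rachford splitting for
        min_A 0.5*tr(A @ Psi @ A.T) - tr(A @ Phi.T) + lam * ||A||_1,
    where Phi ~ sum_k E[z_k z_{k-1}^T] and Psi ~ sum_k E[z_{k-1} z_{k-1}^T]
    come from a Kalman smoother E-step. The quadratic term has an exact
    prox (a linear solve); the l1 term's prox is soft-thresholding."""
    n = Psi.shape[0]
    M = np.linalg.inv(Psi + np.eye(n) / gamma)    # reused in every prox of f
    Z = np.zeros((n, n))
    for _ in range(n_iter):
        A = (Phi + Z / gamma) @ M                 # prox of the quadratic at Z
        Z = Z + soft_threshold(2 * A - Z, gamma * lam) - A
    return (Phi + Z / gamma) @ M                  # sparse transition estimate
```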

    Sparse Graphical Linear Dynamical Systems

    Time-series datasets are central in numerous fields of science and engineering, such as biomedicine, Earth observation, and network analysis. Extensive research exists on state-space models (SSMs), which are powerful mathematical tools that allow for probabilistic and interpretable learning on time series. Estimating the model parameters in SSMs is arguably one of the most complicated tasks, and the inclusion of prior knowledge is known both to ease interpretation and to complicate the inferential tasks. Very recent works have attempted to incorporate a graphical perspective on some of those model parameters, but they present notable limitations that this work addresses. More generally, existing graphical modeling tools are designed to incorporate either static information, focusing on statistical dependencies among independent random variables (e.g., the graphical Lasso approach), or dynamic information, emphasizing causal relationships among time series samples (e.g., graphical Granger approaches). However, there are no joint approaches combining static and dynamic graphical modeling within the context of SSMs. This work fills this gap by introducing a joint graphical modeling framework that bridges the static graphical Lasso model and a causal-based graphical approach for the linear-Gaussian SSM. We present DGLASSO (Dynamic Graphical Lasso), a new inference method within this framework that implements an efficient block alternating majorization-minimization algorithm. The algorithm's convergence is established using modern tools from nonlinear analysis. Experimental validation on synthetic and real weather variability data showcases the effectiveness of the proposed model and inference algorithm.
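    As a rough illustration of the block alternation at the heart of such a method, the toy below estimates a sparse transition matrix (the dynamic graph) and a sparse noise precision (the static graph) for a fully observed VAR(1) process, alternating an ISTA Lasso update with scikit-learn's graphical Lasso. The real DGLASSO operates on a hidden state through a Kalman smoother and a tailored majorization-minimization construction, both of which this sketch deliberately omits; every name here is our own.

```python
import numpy as np
from sklearn.covariance import graphical_lasso

def var1_block_alternation(Z, lam_A=0.1, lam_P=0.1, n_outer=20, n_ista=50):
    """Toy block-alternating estimation for z_k = A z_{k-1} + q_k,
    q_k ~ N(0, inv(P)), with l1 penalties on both A and P."""
    Z0, Z1 = Z[:, :-1], Z[:, 1:]                  # lagged / current samples
    n, K = Z1.shape
    A, P = np.zeros((n, n)), np.eye(n)
    for _ in range(n_outer):
        # A-block: ISTA on 0.5*||P^(1/2) (Z1 - A Z0)||_F^2 + lam_A*||A||_1
        L = np.linalg.norm(P, 2) * np.linalg.norm(Z0 @ Z0.T, 2)
        for _ in range(n_ista):
            G = P @ (A @ Z0 - Z1) @ Z0.T          # gradient of the quadratic
            T = A - G / L
            A = np.sign(T) * np.maximum(np.abs(T) - lam_A / L, 0.0)
        # P-block: graphical Lasso on the residual covariance
        Res = Z1 - A @ Z0
        emp_cov = Res @ Res.T / K + 1e-6 * np.eye(n)
        _, P = graphical_lasso(emp_cov, alpha=lam_P)
    return A, P
```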

    PENDANTSS: PEnalized Norm-ratios Disentangling Additive Noise, Trend and Sparse Spikes

    Denoising, detrending, and deconvolution are usual restoration tasks that are traditionally decoupled; coupled formulations entail complex ill-posed inverse problems. We propose PENDANTSS for joint trend removal and blind deconvolution of sparse peak-like signals. It blends a parsimonious prior with the hypothesis that smooth trend and noise can to some extent be separated by low-pass filtering. We combine the generalized quasi-norm ratio SOOT/SPOQ sparse penalties ℓp/ℓq with the BEADS ternary-assisted source separation algorithm. The result is a tool that is both convergent and efficient, built on a novel trust-region block alternating variable metric forward-backward approach. It outperforms comparable methods when applied to typically peaked analytical chemistry signals. Reproducible code is provided.
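    The norm-ratio idea is easy to state in code. Below is a minimal sketch of a smoothed ℓ1-over-ℓ2 (SOOT-type) penalty and its gradient, the approximately scale-invariant sparsity measure that the paper's SPOQ family generalizes to ℓp/ℓq; the smoothing constants are illustrative defaults, not the paper's settings, and the trust-region forward-backward machinery is not reproduced here. Because the ratio is nearly invariant to a rescaling of the signal, it penalizes the support pattern rather than the amplitude, which suits peak-like signals.

```python
import numpy as np

def soot_penalty(x, alpha=1e-6, beta=1e-3, eta=1e-1):
    """Smoothed l1/l2 norm-ratio penalty log((l1a(x) + beta) / l2e(x))
    and its gradient, usable inside a forward-backward scheme."""
    l1 = np.sum(np.sqrt(x**2 + alpha**2) - alpha)   # smoothed l1 norm
    l2 = np.sqrt(np.sum(x**2) + eta**2)             # smoothed l2 norm
    val = np.log((l1 + beta) / l2)
    grad = (x / np.sqrt(x**2 + alpha**2)) / (l1 + beta) - x / l2**2
    return val, grad
```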

    Simulated annealing: a review and a new scheme

    Finding the global minimum of a nonconvex optimization problem is a notoriously hard task appearing in numerous applications, from signal processing to machine learning. Simulated annealing (SA) is a family of stochastic optimization methods in which an artificial temperature controls the exploration of the search space while preserving convergence to the global minima. SA is efficient, easy to implement, and theoretically sound, but suffers from a slow convergence rate. The purpose of this work is twofold. First, we provide a comprehensive overview of SA and its accelerated variants. Second, we propose a novel SA scheme called curious simulated annealing, combining the assets of two recent acceleration strategies. Theoretical guarantees of this algorithm are provided. Its performance with respect to existing methods is illustrated on practical examples.
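    For reference, a plain Metropolis-type SA loop with a logarithmic cooling schedule looks as follows; this is the textbook baseline the review builds on, not the proposed curious simulated annealing scheme.

```python
import numpy as np

def simulated_annealing(f, x0, sigma=0.5, T0=1.0, n_iter=20000, seed=0):
    """Minimize f by random-walk proposals accepted with the Metropolis
    rule; the slowly decreasing temperature preserves global-minimum
    convergence guarantees while allowing uphill moves early on."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    best, fbest = x.copy(), fx
    for k in range(1, n_iter + 1):
        T = T0 / np.log(k + 1)                        # logarithmic cooling
        y = x + sigma * rng.standard_normal(x.shape)  # local random proposal
        fy = f(y)
        if fy <= fx or rng.random() < np.exp(-(fy - fx) / T):
            x, fx = y, fy                             # accept the move
            if fx < fbest:
                best, fbest = x.copy(), fx
    return best, fbest

# Example on a 1D nonconvex landscape with many local minima:
# simulated_annealing(lambda v: float(v[0]**2 + 10 * np.sin(3 * v[0])), [4.0])
```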

    Probabilistic modeling and inference for sequential space-varying blur identification

    The identification of the parameters of spatially variant blurs, given a clean image and its blurry noisy version, is a challenging inverse problem of interest in many application fields, such as biological microscopy and astronomical imaging. In this paper, we consider a parametric model of the blur and introduce a 1D state-space model to describe the statistical dependence among neighboring kernels. We apply a Bayesian approach to estimate the posterior distribution of the kernel parameters given the available data. Since this posterior is intractable for most realistic models, we propose to approximate it through a sequential Monte Carlo approach, processing all data in a sequential and efficient manner. Additionally, we propose a new sampling method to alleviate the particle degeneracy problem, which arises in approximate Bayesian filtering, particularly with challenging, concentrated posterior distributions. The considered method allows us to process image patches sequentially at reasonable computational and memory cost. Moreover, the probabilistic approach adopted in this paper provides uncertainty quantification, which is useful for image restoration. Practical experimental results illustrate the improved estimation performance of the novel approach, also demonstrating the benefits of exploiting the spatial structure of the parametric blurs in the considered models.
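    A generic bootstrap particle filter conveys the sequential Monte Carlo backbone: propagate a particle cloud of kernel parameters patch by patch, reweight by the patch likelihood, and resample when the effective sample size degenerates. The sketch below assumes a scalar parameter with random-walk dynamics and a user-supplied vectorized log-likelihood; it is not the paper's tailored sampler for concentrated posteriors.

```python
import numpy as np

def bootstrap_pf(patches, loglik, n_part=500, tau=0.05, seed=0):
    """Track a slowly varying kernel parameter theta across image patches.
    `loglik(patch, theta)` must return one log-likelihood per particle."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(0.0, 1.0, n_part)              # initial particle cloud
    w = np.full(n_part, 1.0 / n_part)
    estimates = []
    for patch in patches:
        theta = theta + tau * rng.standard_normal(n_part)  # propagate
        logw = np.log(w + 1e-300) + loglik(patch, theta)   # reweight
        w = np.exp(logw - logw.max())
        w /= w.sum()
        if 1.0 / np.sum(w**2) < n_part / 2:           # effective sample size
            idx = rng.choice(n_part, size=n_part, p=w)
            theta = theta[idx]                        # multinomial resampling
            w = np.full(n_part, 1.0 / n_part)
        estimates.append(np.sum(w * theta))           # posterior-mean estimate
    return np.array(estimates)
```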

    A stochastic 3MG algorithm with application to 2d filter identification

    Stochastic optimization plays an important role in solving many problems encountered in machine learning or adaptive processing. In this context, the second-order statistics of the data are often unknown a priori, or their direct computation is too intensive, and they have to be estimated online from the related signals. In the context of batch optimization of an objective function that is the sum of a data fidelity term and a penalization (e.g. a sparsity promoting function), Majorize-Minimize (MM) subspace methods have recently attracted much interest since they are fast, highly flexible, and effective in ensuring convergence. The goal of this paper is to show how these methods can be successfully extended to the case when the cost function is replaced by a sequence of stochastic approximations of it. Simulation results illustrate the good practical performance of the proposed MM Memory Gradient (3MG) algorithm when applied to 2D filter identification.
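    The 2D setting reduces to the vector formulation once each output pixel is paired with a vectorized input patch, so a stream of such pairs can feed a stochastic MM update like the one sketched earlier in this list; the helper below is our own illustration of that reduction, assuming a valid-mode k-by-k filter.

```python
import numpy as np

def filter_id_stream(X, Y, k):
    """Yield (regressor, output) pairs turning 2D filter identification
    into online least squares: each k-by-k patch of the input image X is
    the regressor for the corresponding pixel of the filtered image Y."""
    H, W = X.shape
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            yield X[i:i + k, j:j + k].ravel(), Y[i, j]  # y = patch . h + noise
```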